A Convex Relaxation Barrier to Tight Robustness Verification of Neural Networks
Verification of neural networks enables us to gauge their robustness against adversarial attacks. Verification algorithms fall into two categories: exact verifiers that run in exponential time and relaxed verifiers that are efficient but incomplete. In this paper, we unify all existing LP-relaxed verifiers, to the best of our knowledge, under a general convex relaxation framework. This framework works for neural networks with diverse architectures and nonlinearities and covers both primal and dual views of neural network verification. Next, we perform large-scale experiments, amounting to more than 22 CPU-years, to obtain the exact solution to the convex-relaxed problem that is optimal within our framework for ReLU networks. We find that the exact solution does not significantly narrow the gap between PGD and existing relaxed verifiers for various networks trained normally or robustly on the MNIST and CIFAR datasets. Our results suggest there is an inherent barrier to tight verification for the large class of methods captured by our framework. We discuss possible causes of this barrier and potential future directions for bypassing it.
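To make the class of LP-relaxed verifiers concrete, the sketch below applies the standard "triangle" relaxation of an unstable ReLU (y >= 0, y >= z, y <= u(z - l)/(u - l)) to a toy one-hidden-layer network and solves the resulting LP to obtain a certified lower bound on the output over an l-infinity input box. The network weights, input box, and use of `scipy.optimize.linprog` are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np
from scipy.optimize import linprog

# Toy network: z = W1 @ x + b1, y = ReLU(z), out = w2 @ y (illustrative weights).
W1 = np.array([[1.0, -1.0], [0.5, 1.0]])
b1 = np.zeros(2)
w2 = np.array([1.0, -1.0])

# Input box x in [-1, 1]^2; pre-activation bounds by interval arithmetic.
x_lo, x_hi = -np.ones(2), np.ones(2)
l = W1.clip(min=0) @ x_lo + W1.clip(max=0) @ x_hi + b1  # lower pre-activation bounds
u = W1.clip(min=0) @ x_hi + W1.clip(max=0) @ x_lo + b1  # upper pre-activation bounds

# LP variables v = [x (2), z (2), y (2)]. Triangle relaxation for each
# unstable ReLU (l < 0 < u): y >= 0, y >= z, y <= u/(u-l) * (z - l).
s = u / (u - l)
A_ub, b_ub = [], []
for i in range(2):
    row = np.zeros(6); row[2 + i] = 1.0; row[4 + i] = -1.0    # z - y <= 0
    A_ub.append(row); b_ub.append(0.0)
    row = np.zeros(6); row[2 + i] = -s[i]; row[4 + i] = 1.0   # y - s*z <= -s*l
    A_ub.append(row); b_ub.append(-s[i] * l[i])

# Affine layer as equality constraints: z - W1 @ x = b1.
A_eq = np.hstack([-W1, np.eye(2), np.zeros((2, 2))])
bounds = [(-1, 1)] * 2 + list(zip(l, u)) + [(0, ui) for ui in u]

# Minimize the output w2 @ y over the relaxed feasible set.
c = np.concatenate([np.zeros(4), w2])
res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub,
              A_eq=A_eq, b_eq=b1, bounds=bounds)
print(res.fun)  # certified lower bound on the network output
```

The LP optimum is a sound lower bound: every true network execution inside the input box is feasible for the relaxation, so the true minimum output can never fall below `res.fun`. The barrier discussed in the abstract is that, for deeper networks, even this relaxation's exact optimum stays far from the true minimum.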
Wavelet-Accelerated Physics-Informed Quantum Neural Network for Multiscale Partial Differential Equations
Gupta, Deepak, Pandey, Himanshu, Behera, Ratikanta
This work proposes a wavelet-based physics-informed quantum neural network framework to efficiently address multiscale partial differential equations that involve sharp gradients, stiffness, rapid local variations, and highly oscillatory behavior. Traditional physics-informed neural networks (PINNs) have demonstrated substantial potential in solving differential equations, and their quantum counterparts, quantum PINNs, exhibit enhanced representational capacity with fewer trainable parameters. However, both approaches face notable challenges in accurately resolving multiscale features. Furthermore, their reliance on automatic differentiation for constructing loss functions introduces considerable computational overhead, resulting in longer training times. To overcome these challenges, we develop a wavelet-accelerated physics-informed quantum neural network that eliminates the need for automatic differentiation, significantly reducing computational complexity. The proposed framework incorporates the multiresolution property of wavelets within the quantum neural network architecture, thereby enhancing the network's ability to capture both local and global features of multiscale problems. Numerical experiments demonstrate that our proposed method achieves superior accuracy while requiring less than five percent of the trainable parameters of classical wavelet-based PINNs, resulting in faster convergence. Moreover, it offers a three- to five-fold speed-up over existing quantum PINNs, highlighting the potential of the proposed approach for efficiently solving challenging multiscale and oscillatory problems.
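To illustrate the two ingredients of the abstract in a purely classical, minimal setting, the sketch below fits a multiscale Mexican-hat wavelet expansion to a 1-D Poisson problem, building the physics residual with central finite differences instead of automatic differentiation. The wavelet family, scales, collocation grid, and test problem are illustrative assumptions, not the authors' quantum architecture.

```python
import numpy as np

# Test problem: u''(x) = -pi^2 sin(pi x) on [0, 1], u(0) = u(1) = 0
# (exact solution u(x) = sin(pi x)).
def psi(t):  # Mexican-hat mother wavelet
    return (1.0 - t**2) * np.exp(-0.5 * t**2)

# Multiresolution dictionary: psi(2^j x - k) over several scales j and shifts k.
feats = [(j, k) for j in range(4) for k in range(2**j + 1)]

def design(xs):
    return np.column_stack([psi(2.0**j * xs - k) for j, k in feats])

xs = np.linspace(0.0, 1.0, 64)   # collocation points
h = 1e-3                          # finite-difference step: no autodiff needed
D2 = (design(xs + h) - 2 * design(xs) + design(xs - h)) / h**2

# The model is linear in its coefficients, so the physics-informed loss
# (PDE residual rows plus weighted boundary rows) is a least-squares problem.
rhs = -np.pi**2 * np.sin(np.pi * xs)
A = np.vstack([D2, 10.0 * design(np.array([0.0, 1.0]))])
b = np.concatenate([rhs, np.zeros(2)])
coef, *_ = np.linalg.lstsq(A, b, rcond=None)

u_hat = design(xs) @ coef
err = np.max(np.abs(u_hat - np.sin(np.pi * xs)))
print(f"max abs error vs sin(pi x): {err:.2e}")
```

In the paper's setting the wavelet features would feed a parameterized quantum circuit trained iteratively; the point of the sketch is only that wavelet multiresolution features plus finite-difference residuals remove the autodiff pass from the loss construction.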
Reviewer
We will make this clearer in the text.
- "why is ... revelation game the right relaxation?" We agree many relaxations are possible. We will add this as a formal theorem to the paper.
- "It would be useful to have more explicit discussion of the interpretation of ..." We will add this example into the text.
- "useful to see an example ... [with] population of faulty agents ..." The school choice example can actually be thought of as ... The RMAC bounds show precisely this. We will expand the discussion to make this point explicitly.
- "gap ... between NP-hard exact solution ... and RFP" Our results guarantee that if RFP converges, then it converges to a ... In some cases, this may not be the global optimum. For such small cases, a gap (or lack of gap) is unlikely to be representative of real-world problems.
- "What does an element of ..." For example, if we consider the 'what would happen if we ...' We apologize and will fix this and the other missing formalism. We will make this clearer in the text.
- "not explained how this method differs practically from [1] and [2]" We cite [1] in the text already, and ... By contrast, our goal is to relax these strong assumptions. We will make this clearer in the text.
- "... it would be nice to know what new techniques/methods may be more broadly interesting."
We thank all the reviewers. We hope the reviewers will consider raising their ratings if our response addresses their concerns. Theoretically, in Eq. (11), if we take ... sufficiently accurate, the solution will eventually become monotonic. In practice, we found that we usually find ... Accordingly, we have revised the relevant sections of the paper by adding pertinent technical details. General Comments: All the learned models whose performance we report are certified monotonic. We will make a note of this in the paper; see also G#1 and G#2.